
    The 'law of requisite variety' may assist climate change negotiations: a review of the Kyoto and Durban meetings

    In his writing on cybernetics, Ashby described a Law intended to resolve the difficulties that arise in complex situations: he suggested using variety to combat complexity. In this paper, we note that the delegates to the UN Framework Convention on Climate Change (UNFCCC) meeting in Kyoto, 1997, were offered a ‘simplifying solution’ to cope with the complexity of discussing the multiple pollutants allegedly contributing to ‘climate change’. We assert that the adoption of CO2eq has resulted in imprecise thinking regarding the ‘carbon footprint’, that is, ‘CO2’, to the exclusion of other pollutants. We propose, as Ashby might have done, that CO2eq and the other factors within the ‘climate change’ negotiations be disaggregated so that careful and specific solutions can be agreed for each factor individually. We also propose that a new permanent and transparent ‘action group’ take charge of agenda setting and manage the messy annual meetings. This body would be responsible for achieving accords at these annual meetings, rather than that task being forced on national hosts. We acknowledge that the task is daunting, and we recommend moving on from Ashby's Law to Beer's Viable Systems approach.

    Perceptual Context in Cognitive Hierarchies

    Cognition depends not only on bottom-up abstraction of sensor features but also on contextual information passed top-down. Context is higher-level information that helps to predict belief states at lower levels. The main contribution of this paper is a formalisation of perceptual context and its integration into a new process model for cognitive hierarchies. Several simple instantiations of a cognitive hierarchy are used to illustrate the role of context. Notably, we demonstrate the use of context in a novel approach to visually tracking the pose of rigid objects with just a 2D camera.
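
    Purely as an illustration of the top-down/bottom-up interplay the abstract describes (a minimal sketch, not the paper's formalisation), the following Python snippet fuses a higher-level context prior with a lower-level sensor likelihood over a few candidate poses; all names and numbers are hypothetical.

    import numpy as np

    def fuse(context_prior, sensor_likelihood):
        """Combine a top-down context prior with bottom-up evidence into a posterior belief."""
        posterior = context_prior * sensor_likelihood
        return posterior / posterior.sum()

    # Hypothetical example: three candidate object poses seen by a 2D camera.
    context_prior = np.array([0.7, 0.2, 0.1])      # higher level expects pose 0
    sensor_likelihood = np.array([0.3, 0.4, 0.3])  # ambiguous image evidence
    print(fuse(context_prior, sensor_likelihood))  # context shifts the belief toward pose 0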

    Active interoceptive inference and the emotional brain

    We review a recent shift in conceptions of interoception and its relationship to hierarchical inference in the brain. The notion of interoceptive inference means that bodily states are regulated by autonomic reflexes that are enslaved by descending predictions from deep generative models of our internal and external milieu. This re-conceptualization illuminates several issues in cognitive and clinical neuroscience, with implications for experiences of selfhood and emotion. We first contextualize interoception in terms of active (Bayesian) inference in the brain, highlighting its enactivist (embodied) aspects. We then consider the key role of uncertainty or precision, and how this might translate into neuromodulation. We next examine the implications for understanding the functional anatomy of the emotional brain, surveying recent observations on agranular cortex. Finally, we turn to theoretical issues, namely the role of interoception in shaping a sense of embodied self and feelings. We draw links between physiological homoeostasis and allostasis, early cybernetic ideas of predictive control, and hierarchical generative models in predictive processing. The explanatory scope of interoceptive inference ranges from explanations for autism and depression through to consciousness. We offer a brief survey of these exciting developments.
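
    As a loose illustration of the precision-weighted prediction-error idea mentioned above (a minimal sketch under our own assumptions, not the authors' model), the following Python snippet nudges a bodily variable toward a descending prediction, with the size of each correction scaled by precision.

    def interoceptive_step(state, prediction, precision, gain=0.1):
        """One autonomic 'reflex' step: correct the bodily state toward the predicted set-point."""
        error = prediction - state               # interoceptive prediction error
        return state + gain * precision * error  # correction weighted by precision (confidence)

    state, prediction = 36.0, 37.0               # hypothetical physiological variable and set-point
    for _ in range(50):
        state = interoceptive_step(state, prediction, precision=0.8)
    print(round(state, 3))                       # converges toward the predicted set-point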

    Self-explaining AI as an alternative to interpretable AI

    The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is often possible to approximate the input-output relations of deep neural networks with a few human-understandable rules, the discovery of the double descent phenomenon suggests that such approximations do not accurately capture the mechanism by which deep neural networks work. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if asked to extrapolate. To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and the explanation. For this approach to work, it is important that the explanation actually be related to the decision, ideally capturing the mechanism used to arrive at the explanation. Finally, we argue it is important that deep learning based systems include a "warning light" based on techniques from applicability domain analysis to warn the user if a model is asked to extrapolate outside its training distribution. For a video presentation of this talk, see https://www.youtube.com/watch?v=Py7PVdcu7WY.
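
    The "warning light" idea lends itself to a simple sketch. The following Python code (an assumption-laden illustration, not the authors' implementation) flags inputs whose mean distance to their nearest training points exceeds a percentile threshold, signalling likely extrapolation.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    class DomainWarningLight:
        """Illustrative applicability-domain check: warn when an input looks far from the training data."""
        def __init__(self, X_train, k=5, percentile=95):
            self.nn = NearestNeighbors(n_neighbors=k).fit(X_train)
            train_dist, _ = self.nn.kneighbors(X_train)           # includes each point's self-distance of 0
            self.threshold = np.percentile(train_dist.mean(axis=1), percentile)

        def warn(self, x):
            dist, _ = self.nn.kneighbors(np.atleast_2d(x))
            return float(dist.mean()) > self.threshold            # True -> model is extrapolating

    X_train = np.random.randn(500, 8)                             # stand-in training features
    light = DomainWarningLight(X_train)
    print(light.warn(np.zeros(8)), light.warn(10 * np.ones(8)))   # typically: False True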

    An agent-based approach for the dynamic and decentralized service reconfiguration in collaborative production scenarios

    Future industrial systems call for innovative paradigms that support continuous flexibility, reconfiguration, and evolution in the face of volatile, dynamic markets demanding complex and customized products. Smart manufacturing relies on the capability to adapt and evolve in response to change, in particular by identifying, on the fly, opportunities to reconfigure its behavior and functionalities and to offer new and better adapted services. This paper introduces an agent-based approach for service reconfiguration that identifies reconfiguration opportunities in a pro-active and dynamic manner and implements, on the fly, the strategies that lead to better production efficiency. The prototype developed for a flexible manufacturing system case study made it possible to verify the feasibility of greedy local service reconfiguration in competitive and collaborative industrial automation settings.
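
    To make the greedy, decentralized flavour of such an approach concrete, here is a small Python sketch (our own simplification, not the paper's architecture) in which resource agents bid for pending services based on their skills and current load.

    class ResourceAgent:
        """A hypothetical resource agent that can take on services matching its skills."""
        def __init__(self, name, skills, capacity):
            self.name, self.skills, self.capacity = name, set(skills), capacity
            self.assigned = []

        def bid(self, service):
            # A lower bid (current load) wins; None means the agent cannot offer the service.
            if service in self.skills and len(self.assigned) < self.capacity:
                return len(self.assigned)
            return None

    def reconfigure(agents, pending_services):
        """Greedy local reconfiguration: each pending service goes to the least-loaded capable agent."""
        for service in pending_services:                  # e.g. triggered by a new order or a failure
            bids = [(a.bid(service), a) for a in agents if a.bid(service) is not None]
            if bids:
                winner = min(bids, key=lambda b: b[0])[1]
                winner.assigned.append(service)

    agents = [ResourceAgent("A1", {"drill", "mill"}, 2), ResourceAgent("A2", {"mill"}, 2)]
    reconfigure(agents, ["mill", "drill", "mill"])
    print({a.name: a.assigned for a in agents})           # services spread across capable agents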

    Self-Organization Leads to Supraoptimal Performance in Public Transportation Systems

    The performance of public transportation systems affects a large part of the population. Current theory assumes that passengers are served optimally when vehicles arrive at stations at regular intervals. In this paper, it is shown that self-organization can improve the performance of public transportation systems beyond this theoretical optimum by responding adaptively to local conditions. This is possible because of a “slower-is-faster” effect: passengers spend more time waiting at stations, but total travel times are reduced. The proposed self-organizing method regulates headways using “antipheromones”, inspired by the stigmergy (communication via the environment) of some ant colonies.
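
    One plausible reading of the antipheromone mechanism can be sketched in a few lines of Python (the target headway, gain, and holding rule below are our assumptions, not the paper's equations): a vehicle finding a fresh marker left by its predecessor holds briefly to restore spacing, which is where the "slower-is-faster" effect comes from.

    def holding_time(time_since_previous_vehicle, target_headway=300.0, gain=0.5):
        """Seconds a vehicle idles at a station to push its headway back toward the target."""
        # A fresh antipheromone (the previous vehicle passed very recently) signals a short
        # headway, so the trailing vehicle waits; a faded marker means it departs at once.
        gap_deficit = max(0.0, target_headway - time_since_previous_vehicle)
        return gain * gap_deficit

    print(holding_time(30))    # arrived right behind another vehicle -> hold 135.0 s
    print(holding_time(600))   # gap already large -> 0.0, leave immediately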

    "Meaning" as a sociological concept: A review of the modeling, mapping, and simulation of the communication of knowledge and meaning

    The development of discursive knowledge presumes the communication of meaning as analytically different from the communication of information. Knowledge can then be considered as a meaning which makes a difference. Whereas the communication of information is studied in the information sciences and scientometrics, the communication of meaning has been central to Luhmann's attempts to make the theory of autopoiesis relevant for sociology. Analytical techniques such as semantic maps and the simulation of anticipatory systems enable us to operationalize the distinctions which Luhmann proposed as relevant to the elaboration of Husserl's "horizons of meaning" in empirical research: interactions among communications, the organization of meaning in instantiations, and the self-organization of interhuman communication in terms of symbolically generalized media such as truth, love, and power. Horizons of meaning, however, remain uncertain orders of expectations, and one should caution against reification from the meta-biological perspective of systems theory.

    A Minimal Model of Metabolism Based Chemotaxis

    Since the pioneering work by Julius Adler in the 1960s, bacterial chemotaxis has been studied predominantly as metabolism-independent, and all available simulation models of bacterial chemotaxis endorse this assumption. Recent studies have shown, however, that many metabolism-dependent chemotactic patterns occur in bacteria. We present the simplest artificial protocell model capable of performing metabolism-based chemotaxis. The model serves as a proof of concept to show how even the simplest metabolism can sustain chemotactic patterns of varying sophistication. It also reproduces a set of phenomena that have recently attracted attention in bacterial chemotaxis and provides insights into alternative mechanisms that could instantiate them. We conclude that relaxing the metabolism-independence assumption yields important theoretical advances, forces us to rethink some established preconceptions, and may help us better understand unexplored and poorly understood aspects of bacterial chemotaxis.
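
    As a rough illustration of what "metabolism-based" means here (a toy sketch under our own assumptions, not the authors' protocell model), the following Python run-and-tumble walker tumbles according to the trend of an internal energy variable fed by the local attractant, never sensing the external gradient directly.

    import math, random

    def attractant(x):
        return math.exp(-abs(x - 50.0) / 20.0)      # hypothetical nutrient peak at x = 50

    x, direction, energy = 0.0, 1, 0.0
    for _ in range(2000):
        new_energy = 0.9 * energy + attractant(x)   # crude metabolism: uptake minus decay
        improving = new_energy > energy             # only the internal trend is sensed
        energy = new_energy
        p_tumble = 0.05 if improving else 0.5       # keep running while metabolism improves
        if random.random() < p_tumble:
            direction = -direction
        x += direction
    print(round(x, 1))                              # typically ends near the nutrient peak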

    Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems

    A generic mechanism, networked buffering, is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one functional role within a system, and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects enhance the robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness in several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.
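
    A toy Python allocation (our illustration, not the paper's genome:proteome model) shows the buffering effect: agents with overlapping roles and a little excess capacity absorb most of a local knockout purely through local, greedy assignments.

    demand = {"F1": 3, "F2": 3, "F3": 3}                 # required output per function
    agents = [{"funcs": ("F1", "F2"), "capacity": 4},    # versatile agents with overlapping
              {"funcs": ("F2", "F3"), "capacity": 4},    # (degenerate) functional repertoires
              {"funcs": ("F3", "F1"), "capacity": 4}]    # and a little excess capacity

    def unmet_demand(agents, demand):
        """Greedy, purely local allocation; returns the demand left uncovered per function."""
        remaining = dict(demand)
        for agent in agents:
            spare = agent["capacity"]
            for f in agent["funcs"]:
                used = min(spare, remaining[f])
                remaining[f] -= used
                spare -= used
        return remaining

    print(unmet_demand(agents, demand))    # everything covered: {'F1': 0, 'F2': 0, 'F3': 0}
    agents[0]["capacity"] = 0              # local perturbation: knock out one agent
    print(unmet_demand(agents, demand))    # overlaps reroute spare capacity; only F1 falls short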

    Motor Adaptation Scaled by the Difficulty of a Secondary Cognitive Task

    Background: Motor learning requires evaluating performance in previous movements and modifying future movements. The executive system, generally involved in planning and decision-making, could monitor and modify behavior in response to changes in task difficulty or performance. Here we aim to quantify the cognitive contribution to responsive and adaptive control, in order to identify possible overlap between cognitive and motor processes. Methodology/Principal Findings: We developed a dual-task experiment that varied the trial-by-trial difficulty of a secondary cognitive task while participants performed a motor adaptation task. Subjects performed a difficulty-graded semantic categorization task while making reaching movements that were occasionally subjected to force perturbations. We find that motor adaptation was specifically impaired on the most difficult-to-categorize trials. Conclusions/Significance: We suggest that the degree of decision-level difficulty of a particular categorization differentially burdens the executive system and subsequently results in a proportional degradation of adaptation. Our results suggest